With the growing integration of digital platforms in education, assessing critical writing skills in Computer-Mediated Communication (CMC) contexts has become imperative. Despite the widespread use of rubrics in language assessment, however, no validated instrument existed specifically for evaluating Iranian EFL learners' critical writing in CMC environments. To address this gap, this study developed and validated an analytic scoring rubric grounded in Paul and Elder's (2019) Intellectual Standards, drawing on 236 Iranian EFL learners and 10 experienced EFL/ESL instructors across diverse institutional and proficiency levels to ensure both learner relevance and expert credibility.

The rubric was developed through a multi-phase, iterative process: initial item generation based on theoretical foundations and the empirical literature; expert review for content validity; thematic analysis of semi-structured interviews with instructors and learners to capture context-specific challenges and expectations in digital writing; and two rounds of pilot testing, with refinement informed by inter-rater discrepancies and user feedback. The final rubric comprises four theoretically and empirically coherent domains: (1) Clarity, Accuracy, and Precision (CAP), targeting surface-level rigor and expression; (2) Relevance and Logic (RL), assessing coherence and argumentative soundness; (3) Depth and Significance (DS), evaluating substantive inquiry and problem engagement; and (4) Breadth and Fairness (BF), measuring perspective-taking and avoidance of bias. Each domain is operationalized through descriptive performance levels (e.g., novice to exemplary).

Statistical validation employed exploratory and confirmatory factor analyses, which verified dimensionality and revealed a stable four-factor structure accounting for 78.4% of total variance, while structural equation modeling (SEM) confirmed strong model fit (CFI = .962, TLI = .951, RMSEA = .047), supporting construct validity. Reliability analyses yielded high Cronbach's alpha values (α = .89–.94 across subscales) and strong inter-rater agreement (ICC = .91), indicating excellent internal consistency and scoring stability. Qualitative findings further affirmed the rubric's usability, transparency, and alignment with authentic CMC writing tasks such as forum posts, blog responses, and argumentative online discussions.

The instrument thus not only fills a methodological gap in EFL assessment but also serves as a formative tool that scaffolds metacognitive awareness and dialogic reasoning in digitally mediated academic writing. Its grounding in the Iranian EFL higher-education context enhances ecological validity, while its theoretical fidelity supports transferability to other CMC-based EFL/ESL settings. Ultimately, the study offers language teachers a practical, evidence-based resource for evaluating and nurturing critical literacy in online environments, gives learners clear criteria for self-regulated improvement, and provides researchers with a psychometrically sound framework for future investigations into digital critical discourse.
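For readers less familiar with the internal-consistency statistic reported above, the following is a minimal illustrative sketch of how Cronbach's alpha is computed from a respondents-by-items score matrix. This is not the authors' analysis code, and the example scores are hypothetical:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                      # number of items (rubric criteria)
    item_vars = scores.var(axis=0, ddof=1)   # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings: 5 respondents scored on 4 rubric criteria (1-5 scale)
ratings = np.array([
    [4, 4, 3, 4],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 3],
    [1, 2, 1, 2],
])
print(round(cronbach_alpha(ratings), 3))
```

Values in the .89–.94 range, as reported here, indicate that the criteria within each subscale measure a common underlying construct; inter-rater agreement (the ICC reported above) is a separate statistic and requires a raters-by-subjects design.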